Bayesian variable selection for linear regression with the κ-G priors
Authors
Abstract
In this paper, we introduce a new methodology for Bayesian variable selection in linear regression that is independent of the traditional indicator method. A diagonal matrix $\mathbf{G}$ is introduced into the prior of the coefficient vector $\boldsymbol{\beta}$, where each entry $g_j$, bounded between $0$ and $1$, serves as a stabilizer for the corresponding $\beta_j$. Mathematically, a promising covariate has a $g_j$ value close to $0$, whereas an unpromising one has a value close to $1$. This property is proven in the paper under orthogonality, together with other asymptotic properties. Computationally, the sample path is obtained through Metropolis-within-Gibbs sampling. We also give two simulations to verify the selection capability.
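The sampling step mentioned in the abstract can be illustrated with a minimal Metropolis-within-Gibbs sketch. This is not the authors' algorithm: the exact form of the κ-G prior is not given here, so the sketch assumes a hypothetical conditional prior $\beta_j \mid g_j \sim N(0, \tau^2/g_j)$ with $g_j \sim \mathrm{Uniform}(0,1)$, chosen only so that a large coefficient pulls $g_j$ toward $0$ and a negligible one pushes it toward $1$, matching the behavior described above; the values of $\tau^2$, $\sigma^2$, and the toy data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first two covariates are active.
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

sigma2 = 0.25   # noise variance, assumed known for simplicity
tau2 = 0.2      # hypothetical prior scale; not from the paper

def log_post_g(gj, bj):
    # Log density (up to a constant) of g_j given beta_j under the
    # assumed prior beta_j | g_j ~ N(0, tau2 / g_j), g_j ~ Uniform(0, 1).
    return 0.5 * np.log(gj) - gj * bj**2 / (2 * tau2)

beta = np.zeros(p)
g = np.full(p, 0.5)
draws = []
for it in range(2000):
    # Gibbs step: beta | g, y is Gaussian under the assumed prior.
    prec = X.T @ X / sigma2 + np.diag(g) / tau2
    cov = np.linalg.inv(prec)
    mean = cov @ X.T @ y / sigma2
    beta = rng.multivariate_normal(mean, cov)
    # Metropolis step for each g_j: uniform random walk on (0, 1),
    # proposals outside the support are rejected outright.
    for j in range(p):
        prop = g[j] + rng.uniform(-0.1, 0.1)
        if 0 < prop < 1:
            if np.log(rng.uniform()) < log_post_g(prop, beta[j]) - log_post_g(g[j], beta[j]):
                g[j] = prop
    if it >= 1000:   # discard the first half as burn-in
        draws.append(g.copy())

g_mean = np.mean(np.array(draws), axis=0)
print(np.round(g_mean, 2))  # g_j for the two active covariates should be noticeably smaller
```

The design mirrors the abstract's description: the conditionally conjugate $\boldsymbol{\beta}$ update is a Gibbs draw, while each bounded $g_j$ gets a simple Metropolis update on $(0,1)$.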
Similar resources
Mixtures of g-priors for Bayesian Variable Selection
Zellner’s g-prior remains a popular conventional prior for use in Bayesian variable selection, despite several undesirable consistency issues. In this paper, we study mixtures of g-priors as an alternative to default g-priors that resolve many of the problems with the original formulation, while maintaining the computational tractability that has made the g-prior so popular. We present theoreti...
g-priors for Linear Regression
where $X$ is the design matrix, $\epsilon \sim N(0, \sigma^2 I)$, and $\beta \sim N(\beta_0, g\sigma^2 (X^T X)^{-1})$. The prior on $\sigma^2$ is the Jeffreys prior, $\pi(\sigma^2) \propto 1/\sigma^2$, and usually, $\beta_0$ is taken to be $0$ for simplification purposes. The appeal of the method is that there is only one free parameter $g$ for all linear regression. Furthermore, the simplicity of the g-prior model generally leads to easily obtained analytical results. However, we st...
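One of the analytical results the snippet alludes to: under this g-prior with $\beta_0 = 0$ and fixed $g$, the posterior mean of $\beta$ is simply the OLS estimate shrunk by the factor $g/(1+g)$. A short illustration (the toy data and the value of $g$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data.
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

g = 10.0  # the single free parameter of the g-prior

# Ordinary least squares estimate.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Under beta ~ N(0, g * sigma^2 * (X'X)^{-1}) with beta0 = 0, the posterior
# precision is (1 + 1/g) X'X / sigma^2, so the posterior mean collapses to
# a uniform shrinkage of the OLS estimate toward zero.
beta_post = (g / (1 + g)) * beta_ols

print(np.round(beta_ols, 3))
print(np.round(beta_post, 3))
```

This closed form is why a single $g$ controls the overall amount of shrinkage for the whole coefficient vector.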
Bayesian linear regression and variable selection for spectroscopic calibration.
This paper presents a Bayesian approach to the development of spectroscopic calibration models. By formulating the linear regression in a probabilistic framework, a Bayesian linear regression model is derived, and a specific optimization method, i.e. Bayesian evidence approximation, is utilized to estimate the model "hyper-parameters". The relation of the proposed approach to the calibration mo...
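The evidence approximation the snippet refers to can be sketched for a plain Bayesian linear model: the hyperparameters (prior precision $\alpha$ and noise precision $\beta$) are re-estimated by iterating MacKay-style fixed-point updates that maximize the marginal likelihood. A minimal sketch with toy data, not the paper's calibration model; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model y = X w + noise.
n, p = 80, 4
X = rng.normal(size=(n, p))
w_true = np.array([1.5, 0.0, -1.0, 0.0])
y = X @ w_true + rng.normal(scale=0.3, size=n)

# Evidence approximation: prior w ~ N(0, alpha^{-1} I), noise precision beta.
# Both hyperparameters are updated by fixed-point iteration on the evidence.
alpha, beta = 1.0, 1.0
eigvals = np.linalg.eigvalsh(X.T @ X)
for _ in range(100):
    A = alpha * np.eye(p) + beta * X.T @ X    # posterior precision of w
    m = beta * np.linalg.solve(A, X.T @ y)    # posterior mean of w
    lam = beta * eigvals
    gamma = np.sum(lam / (alpha + lam))       # effective number of parameters
    alpha = gamma / (m @ m)
    beta = (n - gamma) / np.sum((y - X @ m) ** 2)

print(np.round(m, 3))       # posterior mean weights
print(round(1 / beta, 3))   # estimated noise variance (true value is 0.09)
```

The quantity $\gamma$ counts how many weight directions are well determined by the data, which is what makes the $\alpha$ and $\beta$ updates self-consistent.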
Bayesian Approximate Kernel Regression with Variable Selection
Nonlinear kernel regression models are often used in statistics and machine learning due to greater accuracy than linear models. Variable selection for kernel regression models is a challenge partly because, unlike the linear regression setting, there is no clear concept of an effect size for regression coefficients. In this paper, we propose a novel framework that provides an analog of the eff...
Journal
Journal title: Mathematics for Applications
Year: 2022
ISSN: 1805-3610, 1805-3629
DOI: https://doi.org/10.13164/ma.2022.11